
    Catch Me If You Can: Using Power Analysis to Identify HPC Activity

    Monitoring users on large computing platforms such as high performance computing (HPC) and cloud computing systems is non-trivial. Utilities such as process viewers provide limited insight into what users are running due to granularity limitations, and other sources of data, such as system call tracing, can impose significant operational overhead. However, despite technical and procedural measures, instances of users abusing valuable HPC resources for personal gain have been documented in the past [hpcbitmine], and systems that are open to large numbers of loosely verified users from around the world are at risk of abuse. In this paper, we show how electrical power consumption data from an HPC platform can be used to identify what programs are executed. The intuition is that during execution, programs exhibit various patterns of CPU and memory activity. These patterns are reflected in the power consumption of the system and can be used to identify the programs running. We test our approach on an HPC rack at Lawrence Berkeley National Laboratory using a variety of scientific benchmarks. Among other interesting observations, our results show that by monitoring the power consumption of an HPC rack, it is possible to identify whether particular programs are running, with precision and recall of up to 95% even in noisy scenarios.
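
    The paper's classification pipeline is not reproduced in the abstract, but the core idea, matching a power trace against per-program signatures, can be sketched in a few lines of Python. The windowed statistics and nearest-centroid matching below are illustrative assumptions, not the authors' method:

        import numpy as np

        def trace_features(power, window=256):
            # Summarize a power trace (a NumPy array of watts at a fixed sample
            # rate) by averaging per-window statistics: mean level, variability,
            # and the dominant frequency bin.
            feats = []
            for start in range(0, len(power) - window + 1, window):
                w = power[start:start + window]
                spectrum = np.abs(np.fft.rfft(w - w.mean()))
                feats.append([w.mean(), w.std(), float(spectrum.argmax())])
            return np.mean(feats, axis=0)   # one signature vector per trace

        def identify(trace, signatures):
            # Nearest-centroid match of an observed trace against known signatures.
            f = trace_features(trace)
            return min(signatures, key=lambda name: np.linalg.norm(f - signatures[name]))

        # Hypothetical usage, with signatures learned from labeled benchmark runs:
        #   signatures = {"HPL": trace_features(hpl_run), "NAS-FT": trace_features(ft_run)}
        #   print(identify(unknown_trace, signatures))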

    ASLR: How Robust Is the Randomness?

    This paper examines the security provided by different implementations of Address Space Layout Randomization (ASLR). ASLR is a security mechanism that increases control-flow integrity by making it more difficult for an attacker to successfully execute a buffer-overflow attack, even in systems running vulnerable software. The strength of ASLR lies in the randomness of the offsets it produces in memory layouts. We compare multiple operating systems, each compiled for two different hardware architectures, and measure the amount of entropy provided to a vulnerable application. To our knowledge, this is the first publication that quantitatively compares the entropy of different ASLR implementations. In addition, we provide a method for remotely assessing the efficacy of a particular security feature on systems that are otherwise unavailable for analysis, and we highlight the need for independent evaluation of security mechanisms.
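
    As an illustration of the kind of measurement involved, the following Linux-specific Python sketch (our construction, not the paper's tooling) launches a fresh process repeatedly, records its randomized stack base, and counts how many address bits actually vary across runs:

        import subprocess

        # Each run reads its own randomized stack base from /proc/self/maps.
        CMD = ["python3", "-c",
               "print(int(next(l for l in open('/proc/self/maps') if '[stack]' in l)"
               ".split('-')[0], 16))"]

        def sample_stack_bases(n):
            return [int(subprocess.check_output(CMD)) for _ in range(n)]

        def varying_bits(addresses):
            # Bits that take both values across samples: a rough upper bound on
            # the entropy (in bits) of this mapping's randomization.
            n = len(addresses)
            ones = [sum(a >> b & 1 for a in addresses) for b in range(64)]
            return [b for b in range(64) if 0 < ones[b] < n]

        bases = sample_stack_bases(200)
        print(f"{len(varying_bits(bases))} stack-base bits observed to vary")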

    The medical science DMZ: a network design pattern for data-intensive medical science

    Abstract:
    Objective: We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations.
    Materials and Methods: High-end networking, packet-filter firewalls, and network intrusion-detection systems.
    Results: We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs.
    Discussion: The exponentially increasing amounts of “omics” data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research “Big Data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows.
    Conclusion: By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.
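
    As a rough illustration of the design principle, a Science DMZ keeps the high-throughput path free of stateful deep inspection and instead applies narrow, auditable per-flow filtering at the border, with intrusion detection running out of band. The Python sketch below models such a stateless ACL check; the addresses, ports, and flow list are hypothetical:

        import ipaddress

        ALLOWED_FLOWS = [
            # (approved collaborator network, local data transfer node, dest port)
            (ipaddress.ip_network("192.0.2.0/24"),
             ipaddress.ip_address("203.0.113.10"), 443),
            (ipaddress.ip_network("198.51.100.0/24"),
             ipaddress.ip_address("203.0.113.10"), 2811),  # GridFTP control
        ]

        def permit(src_ip, dst_ip, dst_port):
            # Stateless per-flow check: only pre-approved transfers to the data
            # transfer node (DTN) pass; everything else is dropped at the border.
            src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
            return any(src in net and dst == dtn and dst_port == port
                       for net, dtn, port in ALLOWED_FLOWS)

        assert permit("192.0.2.7", "203.0.113.10", 443)
        assert not permit("192.0.2.7", "203.0.113.10", 22)  # SSH not on the ACL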

    Differential Privacy for Class-based Data: A Practical Gaussian Mechanism

    In this paper, we present a notion of differential privacy (DP) for data that comes from different classes, where class membership is private information that must be protected. The proposed method is an output perturbation mechanism that adds noise to the released query response such that the analyst is unable to infer the underlying class label. The proposed DP method not only protects the privacy of class-based data but also meets quality metrics of accuracy, and it is computationally efficient and practical. We illustrate the efficacy of the proposed method empirically, outperforming the baseline additive Gaussian noise mechanism. We also examine a real-world application, applying the proposed DP method to the autoregressive moving average (ARMA) forecasting method while protecting the privacy of the underlying data source. Case studies on real-world advanced metering infrastructure (AMI) measurements of household power consumption validate the excellent performance of the proposed DP method while also satisfying the accuracy of forecasted power consumption measurements.
    Comment: Under review in IEEE Transactions on Information Forensics & Security.
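
    For context, the baseline that the paper compares against, the classical additive Gaussian noise mechanism, fits in a few lines. This sketch shows only that standard calibration, not the proposed class-based mechanism; the AMI numbers are hypothetical:

        import numpy as np

        def gaussian_mechanism(value, sensitivity, epsilon, delta):
            # Classical (epsilon, delta)-DP calibration (valid for epsilon < 1):
            #   sigma >= sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon
            sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / epsilon
            return value + np.random.normal(0.0, sigma)

        # Hypothetical AMI example: release a household's mean hourly load (kWh),
        # assuming one reading shifts the mean by at most 0.1 kWh (the sensitivity).
        true_mean = 1.37
        print(f"released: {gaussian_mechanism(true_mean, 0.1, 0.5, 1e-5):.3f} kWh")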

    ByzID: Byzantine Fault Tolerance from Intrusion Detection

    Building robust network services that can withstand a wide range of failure types is a fundamental problem in distributed systems. The most general approach, called Byzantine fault tolerance, can mask arbitrary failures. Yet it is often considered too costly to deploy in practice, and many solutions are not resilient to performance attacks. To address this concern, we leverage two key technologies already widely deployed in cloud computing infrastructures: replicated state machines and intrusion detection systems. First, we have designed a general framework for constructing Byzantine failure detectors based on an intrusion detection system. Based on such a failure detector, we have designed and built a practical Byzantine fault-tolerant protocol, which has costs comparable to crash-resilient protocols like Paxos. More importantly, our protocol is particularly robust against several key attacks, such as flooding attacks, timing attacks, and fairness attacks, that are typically not handled well by Byzantine fault masking procedures.
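
    The protocol itself is beyond the scope of an abstract, but the interface between an IDS and a Byzantine failure detector can be sketched as follows; the class and method names are illustrative, not the paper's API:

        class IDSFailureDetector:
            # A Byzantine failure detector driven by IDS alerts: replicas whose
            # traffic deviates from the protocol specification become suspects.
            def __init__(self, replicas):
                self.replicas = set(replicas)
                self.suspected = set()

            def ids_alert(self, replica):
                # Called when the IDS flags a replica (e.g., flooding, timing
                # manipulation, or equivocation).
                self.suspected.add(replica)

            def trusted(self):
                # Replicas the ordering protocol may still accept messages from.
                return self.replicas - self.suspected

        fd = IDSFailureDetector(["r1", "r2", "r3", "r4"])
        fd.ids_alert("r3")      # IDS observes r3 flooding the network
        print(fd.trusted())     # protocol proceeds with {'r1', 'r2', 'r4'}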

    Resolving the Unexpected in Elections: Election Officials' Options

    This paper seeks to assist election officials and their lawyers in effectively handling technical issues that can be difficult to understand and analyze, allowing them to protect themselves and the public interest from unfair accusations, inaccuracies in results, and conspiracy theories. It helps empower officials to recognize which types of voting system events and indicators need a more structured analysis, and what steps to take to set up such evaluations (or forensic assessments) with computer experts.

    Principles of authentication

    In the real world we do authentication hundreds of times a day with little effort and strong confidence. We believe that the digital world can and should catch up. The focus of this paper is authentication for critical applications: specifically, the fundamentals of evaluating whether or not someone is who they say they are, using combinations of multiple meaningful and measurable input factors. We present a "gold standard" for authentication that builds on what we naturally and effortlessly do every day in a face-to-face meeting. We also consider how such authentication systems can remain resilient when users are under duress. This work differs from much other work in authentication, first by focusing on authentication techniques that provide meaningful measures of confidence in identity, and second by using a multifaceted approach that comprehensively integrates multiple factors into a continuous authentication system without adding burdensome overhead to users.
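
    One illustrative way to fuse several measurable factors into a single, continuously re-evaluated confidence in identity is a weighted combination of per-factor scores. The factors, weights, and linear fusion below are our assumptions, not the paper's gold standard:

        FACTORS = {
            # factor: (match confidence this session, weight)
            "password":    (1.00, 0.2),
            "hw_token":    (1.00, 0.4),
            "keystroke":   (0.83, 0.2),  # behavioral biometric, inherently noisy
            "geolocation": (0.95, 0.2),
        }

        def identity_confidence(factors):
            # Weighted average of per-factor confidences, each in [0, 1].
            total = sum(w for _, w in factors.values())
            return sum(p * w for p, w in factors.values()) / total

        # Re-evaluated continuously; a drop below a threshold could trigger
        # step-up authentication (or a duress response).
        print(f"confidence: {identity_confidence(FACTORS):.2f}")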

    Automated Anomaly Detection in Distribution Grids Using uPMU Measurements

    The impact of Phasor Measurement Units (PMUs) for providing situational awareness to transmission system operators has been widely documented. Micro-PMUs (uPMUs) are an emerging sensing technology that can provide similar benefits to Distribution System Operators (DSOs), enabling a level of visibility into the distribution grid that was previously unattainable. In order to support the deployment of these high-resolution sensors, automating data analysis and prioritizing communication to the DSO becomes crucial. In this paper, we explore the use of uPMUs to detect anomalies on the distribution grid. Our methodology is motivated by growing concern about failures of, and attacks on, distribution automation equipment. The effectiveness of our approach is demonstrated through both real and simulated data.
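
    As a simple illustration of anomaly detection on a uPMU stream (not the paper's method), a rolling z-score can flag samples whose voltage magnitude deviates sharply from the recent baseline:

        import numpy as np

        def anomalies(voltage_mag, window=120, z_thresh=4.0):
            # Flag samples deviating from a rolling baseline by > z_thresh sigmas.
            flags = []
            for i in range(window, len(voltage_mag)):
                baseline = voltage_mag[i - window:i]
                mu, sigma = baseline.mean(), baseline.std()
                if sigma > 0 and abs(voltage_mag[i] - mu) / sigma > z_thresh:
                    flags.append(i)
            return flags

        # Hypothetical feed: per-cycle voltage magnitude (p.u.) with a brief sag.
        rng = np.random.default_rng(0)
        v = 1.0 + 0.001 * rng.standard_normal(1200)
        v[900:905] -= 0.05                  # injected momentary voltage sag
        print(anomalies(v))                 # -> indices near 900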